IIT-Madras’s CeRAI and Ericsson Collaborate to Promote Responsible AI in 5G and 6G Technologies

IIT-Madras’ Center for Responsible AI (CeRAI) and Ericsson have entered into a partnership aimed at advancing the field of Responsible AI. The collaboration was formally announced at the symposium “Responsible AI for Networks of the Future”, held on the institute’s campus on Monday.

The highlight of the event was an agreement under which Ericsson named CeRAI a “Platinum Consortium Member” for five years. Under this Memorandum of Understanding (MoU), Ericsson Research will actively support and participate in all research projects undertaken by CeRAI.

The IIT-Madras Center for Responsible Artificial Intelligence is an interdisciplinary research center with a vision to become a leading hub for both fundamental and applied research in Responsible AI. Its immediate goal is to ensure that AI systems deployed in the Indian ecosystem follow ethical and responsible practices.

This partnership between CeRAI and Ericsson is expected to contribute significantly to the development of ethical and responsible AI practices in the country’s evolving technology landscape.

What is Responsible AI?

Responsible AI is an approach to developing and deploying artificial intelligence systems in a safe, trustworthy and ethical way. It is not just about following rules or guidelines; it means taking a thoughtful, purposeful approach to AI development and deployment, weighing the potential risks and benefits of AI, and ensuring that AI systems are used in ways that are fair, just and beneficial to all.

Research into responsible AI has become increasingly important in recent years, particularly in connection with future 6G networks, which will be managed by AI algorithms. Magnus Frodigh, Ericsson’s global head of research, underlined the role of responsible AI in the development of 6G: while AI-driven sensors will connect humans and machines, responsible AI practices are needed to ensure trust, fairness and compliance with privacy requirements.

Speaking at the symposium, Prof. Manu Santhanam, Dean of Industrial Consultancy and Sponsored Research at IIT-Madras, expressed optimism about the collaboration and noted that AI research will shape the business tools of the future. He emphasized IIT-Madras’ commitment to impactful translational work carried out in collaboration with industry.

Prof. B Ravindran of CeRAI and the Robert Bosch Center for Data Science and AI (RBCDSAI) at IIT-Madras elaborated on the partnership, noting that the networks of the future will facilitate high-performance AI systems.

Prof. Ravindran stressed that responsible AI principles must be built into such systems from the outset, and that with 5G and 6G networks, new research is needed to ensure AI models are explainable and can provide performance guarantees suited to different applications.

Some of the institute’s projects showcased during the event were:

  • Large Language Models (LLMs) in Healthcare: This project focuses on detecting biases introduced by large language models, developing scoring methods to assess their real-world applicability, and mitigating those biases. The custom scoring methods are designed around the AI Risk Management Framework (AI RMF) proposed by the National Institute of Standards and Technology (NIST); a toy illustration of group-disparity scoring of this kind appears after this list.
  • Participatory AI: This project addresses the black box nature of AI at different stages of its lifecycle, from pre-development to post-deployment and audit. Drawing inspiration from areas such as urban planning and forest rights, the project explores governance mechanisms that allow stakeholders to provide constructive input, improving the customization, accuracy and reliability of AI while addressing potential negative impacts.
  • Generative AI Models Based on the Attention Mechanism: Generative models built on the attention mechanism have drawn wide interest for their exceptional performance across a range of tasks, yet they are often complex and difficult to interpret. This project focuses on improving the interpretability of attention-based models, understanding their limitations, and identifying the patterns they tend to learn from data; a minimal attention-inspection sketch appears after this list.
  • Multi-Agent Reinforcement Learning for Compromise and Conflict Resolution in Intent-Based Networks: With the increasing importance of intent-based control in telecommunication networks, this project explores multi-agent reinforcement learning (MARL) approaches for handling complex coordination and conflicts among network intents. It aims to leverage explainability and causality for the joint actions of network agents.
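
The LLM-in-Healthcare project above revolves around scoring bias in model outputs. The article does not publish CeRAI’s actual scoring methods, so what follows is only a minimal, hypothetical sketch of group-disparity scoring: the word lists, function names and example completions are invented for illustration, and the lexicon-based sentiment stub stands in for a real classifier.

```python
# Illustrative sketch only: a toy group-disparity score for LLM completions.
# The real CeRAI scoring methods (aligned with the NIST AI RMF) are not
# described in this article; everything below is a hypothetical placeholder.
from statistics import mean

POSITIVE_WORDS = {"skilled", "reliable", "caring", "competent"}
NEGATIVE_WORDS = {"lazy", "unreliable", "aggressive", "incompetent"}

def sentiment(text: str) -> float:
    """Crude lexicon-based sentiment in [-1, 1], standing in for a real classifier."""
    words = text.lower().split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def disparity_score(completions_by_group: dict) -> float:
    """Largest gap in average sentiment across demographic groups (0 = no measured gap)."""
    group_means = {group: mean(sentiment(c) for c in completions)
                   for group, completions in completions_by_group.items()}
    return max(group_means.values()) - min(group_means.values())

# Example: completions an LLM might produce for a templated prompt such as
# "The nurse from [group] was ..." (invented data, for illustration only).
completions = {
    "group_a": ["very skilled and caring", "reliable and competent"],
    "group_b": ["often unreliable", "caring but sometimes aggressive"],
}
print(f"Disparity score: {disparity_score(completions):.2f}")
```

A production method aligned with the NIST framework would aggregate many such measurements across tasks, demographic attributes and risk categories rather than rely on a single lexicon-based score.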
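
For the attention-interpretability project, a common first step is simply to inspect a trained model’s attention weights. The sketch below is a generic illustration using the Hugging Face transformers library and a small pre-trained encoder; it is not CeRAI’s tooling, and raw attention weights give only a partial explanation, which is exactly the kind of limitation the project investigates.

```python
# Minimal sketch: inspecting attention weights of a transformer encoder with
# the Hugging Face `transformers` library (generic illustration only).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "distilbert-base-uncased"  # any small encoder works for this demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

text = "Responsible AI keeps future networks trustworthy."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, shape (batch, heads, seq, seq).
last_layer = outputs.attentions[-1][0]   # last layer, drop the batch dimension
avg_heads = last_layer.mean(dim=0)       # average the attention over all heads
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

# For each token, show which other token it attends to most strongly.
for i, tok in enumerate(tokens):
    j = int(avg_heads[i].argmax())
    print(f"{tok:>12} -> {tokens[j]} ({avg_heads[i, j].item():.2f})")
```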
